With the rapid advancement and increased use of deep learning models in image recognition, security becomes a major concern for their deployment in safety-critical systems. Since the accuracy and robustness of deep learning models are primarily attributed to the purity of the training samples, deep learning architectures are often susceptible to adversarial attacks. Adversarial attacks are often obtained by making subtle perturbations to normal images, which are mostly imperceptible to humans but can severely confuse state-of-the-art machine learning models. We propose a framework, named APuDAE, leveraging Denoising AutoEncoders (DAEs) to purify these samples by using them in an adaptive way, thereby improving the classification accuracy of the targeted classifier networks that have been attacked. We also show how using DAEs adaptively, rather than using them directly, further improves classification accuracy and is more robust to the possibility of adaptive attacks designed to fool them. We demonstrate our results on the MNIST, CIFAR-10, and ImageNet datasets and show how our framework (APuDAE) provides performance comparable to, and in most cases better than, the baseline methods in purifying adversaries. We also design adaptive attacks specifically crafted to attack our purifying model and demonstrate how robust our defense is against them.
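One way to picture the adaptive purification idea: rather than trusting a single forward pass through the DAE, the input is nudged toward the autoencoder's reconstruction over several small steps. Below is a minimal PyTorch sketch under that reading; the tiny DAE, step size, and step count are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

class TinyDAE(nn.Module):
    """Toy denoising autoencoder stand-in (the paper's DAE is larger)."""
    def __init__(self, dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, dim), nn.Sigmoid()
        )

    def forward(self, x):
        return self.net(x)

def adaptive_purify(dae, x, steps=5, step_size=0.1):
    """Pull x toward the DAE's reconstruction over several small steps,
    instead of trusting one full denoising pass (illustrative update rule)."""
    x = x.clone()
    for _ in range(steps):
        with torch.no_grad():
            x = x + step_size * (dae(x) - x)
    return x

# Usage: logits = classifier(adaptive_purify(dae, adversarial_batch))
purified = adaptive_purify(TinyDAE(), torch.rand(8, 784))
```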
With the rapid advancement and increased use of deep learning models in image recognition, security becomes a major concern for their deployment in safety-critical systems. Since the accuracy and robustness of deep learning models are mainly attributed to the purity of the training samples, deep learning architectures are often susceptible to adversarial attacks. Adversarial attacks are obtained by making subtle perturbations to normal images, which are mostly imperceptible to humans but can seriously confuse state-of-the-art machine learning models. What is so special about these intelligently crafted perturbations or noise added to normal images that leads to catastrophic misclassification by deep neural networks? Using statistical hypothesis testing, we find that Conditional Variational AutoEncoders (CVAEs) are surprisingly good at detecting imperceptible image perturbations. In this paper, we show how CVAEs can be effectively used to detect adversarial attacks on image classification networks. We demonstrate our results on the CIFAR-10 dataset and show how our method gives performance comparable to prior methods at detecting adversaries while not getting confused by noisy images, on which most existing methods falter.
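A plausible reading of the detection scheme in code: score each image by its likelihood under the CVAE (e.g., an ELBO-style score conditioned on the classifier's predicted label) and flag inputs whose scores fall in the tail of the clean-data distribution. The sketch below assumes such scores are already computed; the threshold rule is an illustrative stand-in for the paper's hypothesis test.

```python
import numpy as np

def detection_threshold(clean_scores, alpha=0.05):
    """Set a rejection threshold from CVAE scores of held-out clean
    images: flag anything in the worst alpha-tail (illustrative
    stand-in for the paper's statistical hypothesis test)."""
    return np.quantile(clean_scores, alpha)

def is_adversarial(score, threshold):
    # Low likelihood under the class-conditioned CVAE => suspicious input.
    return score < threshold

clean_scores = np.random.normal(loc=-90.0, scale=5.0, size=1000)  # toy ELBOs
tau = detection_threshold(clean_scores)
print(is_adversarial(-120.0, tau), is_adversarial(-88.0, tau))
```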
In large-scale machine learning, recent works have studied the effects of compressing gradients in stochastic optimization in order to alleviate the communication bottleneck. These works have collectively revealed that stochastic gradient descent (SGD) is robust to structured perturbations such as quantization, sparsification, and delays. Perhaps surprisingly, despite the surge of interest in large-scale, multi-agent reinforcement learning, almost nothing is known about the analogous question: Are common reinforcement learning (RL) algorithms also robust to similar perturbations? In this paper, we investigate this question by studying a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction, where a general compression operator is used to model the perturbation. Our main technical contribution is to show that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic theoretical guarantees as their SGD counterparts. We then extend our results significantly to nonlinear stochastic approximation algorithms and multi-agent settings. In particular, we prove that for multi-agent TD learning, one can achieve linear convergence speedups in the number of agents while communicating just $\tilde{O}(1)$ bits per agent at each time step. Our work is the first to provide finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling. Our analysis hinges on studying the drift of a novel Lyapunov function that captures the dynamics of a memory variable introduced by error feedback.
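The error-feedback mechanism at the heart of the analysis is simple to state in code: compress the TD update direction (here top-k sparsification as the compression operator), and carry what compression discarded in a memory variable that is added back before the next compression. A minimal NumPy sketch on a toy linear-function-approximation setup; features, rewards, and step sizes are placeholder assumptions.

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsification: a standard contractive compression operator."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_td_step(theta, e, phi_s, phi_next, r, gamma, alpha, k):
    """One TD(0) step with compression plus error feedback.

    e is the memory variable accumulating what compression discarded;
    adding it back keeps the error from building up over time.
    """
    td_error = r + gamma * phi_next @ theta - phi_s @ theta
    g = alpha * td_error * phi_s          # uncompressed update direction
    h = top_k(g + e, k)                   # compress direction + residual
    e = g + e - h                         # carry over what was dropped
    return theta + h, e

theta, e = np.zeros(10), np.zeros(10)
rng = np.random.default_rng(0)
for _ in range(100):
    phi_s, phi_next = rng.normal(size=10), rng.normal(size=10)
    theta, e = compressed_td_step(theta, e, phi_s, phi_next,
                                  r=rng.normal(), gamma=0.9, alpha=0.01, k=3)
```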
The goal of this paper is to detect objects by exploiting their interrelationships. Rather than relying on predefined and labeled graph structures, we infer a graph prior from object co-occurrence statistics. The key idea of our paper is to model object relations as a function of initial class predictions and co-occurrence priors to generate a graph representation of an image for improved classification and bounding box regression. We additionally learn the object-relation joint distribution via energy based modeling. Sampling from this distribution generates a refined graph representation of the image which in turn produces improved detection performance. Experiments on the Visual Genome and MS-COCO datasets demonstrate our method is detector agnostic, end-to-end trainable, and especially beneficial for rare object classes. What is more, we establish a consistent improvement over object detectors like DETR and Faster-RCNN, as well as state-of-the-art methods modeling object interrelationships.
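The co-occurrence prior can be sketched concretely: count how often pairs of classes appear together in training annotations and normalize the counts into a graph prior over class relations. A minimal sketch; the row normalization is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

def cooccurrence_prior(image_labels, num_classes):
    """Build a symmetric co-occurrence matrix from per-image label sets,
    then row-normalize it into a graph prior (normalization scheme is
    an illustrative choice)."""
    counts = np.zeros((num_classes, num_classes))
    for labels in image_labels:
        labels = sorted(set(labels))
        for i in labels:
            for j in labels:
                if i != j:
                    counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

prior = cooccurrence_prior([[0, 1], [0, 1, 2], [1, 2]], num_classes=3)
print(prior)
```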
We present a machine-learning framework to accurately characterize morphologies of Active Galactic Nucleus (AGN) host galaxies within $z<1$. We first use PSFGAN to decouple host galaxy light from the central point source, then we invoke the Galaxy Morphology Network (GaMorNet) to estimate whether the host galaxy is disk-dominated, bulge-dominated, or indeterminate. Using optical images from five bands of the HSC Wide Survey, we build models independently in three redshift bins: low $(0<z<0.25)$, medium $(0.25<z<0.5)$, and high $(0.5<z<1.0)$. By first training on a large number of simulated galaxies, then fine-tuning using far fewer classified real galaxies, our framework predicts the actual morphology for $\sim 60\%-70\%$ of host galaxies from the test sets, with a classification precision of $\sim 80\%-95\%$, depending on redshift bin. Specifically, our models achieve disk precision of $96\%/82\%/79\%$ and bulge precision of $90\%/90\%/80\%$ (for the three redshift bins), at thresholds corresponding to indeterminate fractions of $30\%/43\%/42\%$. The classification precision of our models has a noticeable dependency on host galaxy radius and magnitude. No strong dependency is observed on contrast ratio. Comparing classifications of real AGNs, our models agree well with traditional 2D fitting with GALFIT. The PSFGAN+GaMorNet framework does not depend on the choice of fitting functions or galaxy-related input parameters, runs orders of magnitude faster than GALFIT, and is easily generalizable via transfer learning, making it an ideal tool for studying AGN host galaxy morphology in forthcoming large imaging surveys.
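The reported precision/indeterminate trade-off amounts to a confidence threshold on the network's class probabilities: predictions below the threshold are routed to the indeterminate bin, which raises precision on the remaining disk/bulge calls. A minimal sketch with an assumed threshold and output format:

```python
def classify_morphology(p_disk, p_bulge, threshold=0.8):
    """Map (disk, bulge) probabilities to a label, routing low-confidence
    cases to 'indeterminate'; raising the threshold trades completeness
    for precision (values here are illustrative, not the paper's)."""
    if p_disk >= threshold and p_disk > p_bulge:
        return "disk-dominated"
    if p_bulge >= threshold and p_bulge > p_disk:
        return "bulge-dominated"
    return "indeterminate"

print(classify_morphology(0.9, 0.05), classify_morphology(0.55, 0.40))
```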
New technologies and the availability of geospatial data have drawn attention to spatio-temporal biases present in society. For example: the COVID-19 pandemic highlighted disparities in the availability of broadband service and its role in the digital divide; the environmental justice movement in the United States has raised awareness to health implications for minority populations stemming from historical redlining practices; and studies have found varying quality and coverage in the collection and sharing of open-source geospatial data. Despite the extensive literature on machine learning (ML) fairness, few algorithmic strategies have been proposed to mitigate such biases. In this paper we highlight the unique challenges for quantifying and addressing spatio-temporal biases, through the lens of use cases presented in the scientific literature and media. We envision a roadmap of ML strategies that need to be developed or adapted to quantify and overcome these challenges -- including transfer learning, active learning, and reinforcement learning techniques. Further, we discuss the potential role of ML in providing guidance to policy makers on issues related to spatial fairness.
In contrast to training traditional machine learning (ML) models in data centers, federated learning (FL) trains ML models over local datasets contained on resource-constrained, heterogeneous edge devices. Existing FL algorithms aim to learn a single global model for all participating devices, which may not be helpful to every device taking part in the training, owing to the heterogeneity of the data across devices. Recently, Hanzely and Richtárik (2020) proposed a new formulation for training personalized FL models, aimed at balancing the trade-off between the traditional global model and the local models that individual devices could train using only their private data. They derived a new algorithm, called Loopless Local Gradient Descent (L2GD), to solve it, and showed that it improves communication complexity in regimes where more personalization is needed. In this paper, we equip their L2GD algorithm with a bidirectional compression mechanism to further alleviate the communication bottleneck between the local devices and the server. Unlike other compression-based algorithms used in the FL setting, our compressed L2GD algorithm operates on a probabilistic communication protocol, where communication does not happen on a fixed schedule. Moreover, our compressed L2GD algorithm maintains a convergence rate similar to that of vanilla SGD without compression. To validate the efficiency of our algorithm, we perform diverse numerical experiments on both convex and non-convex problems, using various compression techniques.
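A toy sketch of the probabilistic communication protocol: in each round, with probability p the devices communicate compressed messages and are pulled toward the average model; otherwise each takes a purely local gradient step. The top-k compressor, step sizes, and two-device quadratic problem are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_k(v, k=2):
    """Toy top-k compressor standing in for the paper's operators."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def l2gd_round(x, grads, lam=0.5, eta=0.1, p=0.3):
    """One round of an L2GD-style method with probabilistic communication.

    With probability p the devices communicate and are pulled toward the
    average (messages compressed in both directions, crudely modeled here);
    otherwise each device takes a purely local gradient step.
    """
    if rng.random() < p:
        x_bar = np.mean([top_k(xi) for xi in x], axis=0)    # uplink compressed
        return [xi - eta * (lam / p) * top_k(xi - x_bar)    # downlink compressed
                for xi in x]
    return [xi - (eta / (1 - p)) * g(xi) for xi, g in zip(x, grads)]

# Toy problem: two devices with shifted quadratic objectives.
grads = [lambda z: z - 1.0, lambda z: z + 1.0]
x = [np.zeros(4), np.zeros(4)]
for _ in range(300):
    x = l2gd_round(x, grads)
```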
In this research, we extend the universal reinforcement learning (URL) agent models of artificial general intelligence to quantum environments. The utility function of a classical exploratory stochastic knowledge-seeking agent, KL-KSA, is generalized to distance measures from quantum information theory on density matrices. Quantum process tomography (QPT) algorithms form the tractable set of programs for modeling environmental dynamics. The optimal QPT policy is selected based on a mutable cost function combining algorithmic complexity and computational resource complexity. Instead of Turing machines, we estimate the cost metrics on a high-level language to allow realistic experimentation. The entire agent design is encapsulated in a self-replicating quine, which mutates the cost function based on the predictive value of the optimal policy selection scheme. Thus, multiple agents with Pareto-optimal QPT policies evolve using genetic programming, mimicking the development of physical theories, each with different resource trade-offs. This formal framework is termed the Quantum Knowledge Seeking Agent (QKSA). Despite its importance, there are few quantum reinforcement learning models, in contrast to the current thrust in quantum machine learning. QKSA is the first proposal of a framework resembling the classical URL models. Similar to how AIXI-tl is a resource-bounded active version of Solomonoff universal induction, QKSA is a resource-bounded participatory observer framework for the recently proposed algorithmic information-based reconstruction of quantum mechanics. QKSA can be applied to simulate and study aspects of quantum information theory. Specifically, we demonstrate that it can be used to accelerate quantum variational algorithms, which include tomographic reconstruction as an integral subroutine.
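The mutable cost function can be caricatured in code: a weighted mix of description length (compression as a computable proxy for algorithmic complexity) and measured run time, with the weights being what a QKSA-style quine would mutate. Everything below is an illustrative assumption, including the candidate "QPT models".

```python
import time
import zlib

def cost(program_src, run, w_len=1.0, w_time=1.0):
    """Mutable cost mixing description length (compression as a computable
    proxy for algorithmic complexity) with measured run time; the weights
    are what a QKSA-style quine would mutate. All choices illustrative."""
    length = len(zlib.compress(program_src.encode()))
    t0 = time.perf_counter()
    run()
    return w_len * length + w_time * (time.perf_counter() - t0)

# Hypothetical candidate QPT models: (source text, callable to execute).
candidates = {
    "qpt_model_a": ("reconstruct(rho, basis='pauli')", lambda: sum(range(10**4))),
    "qpt_model_b": ("reconstruct(rho, basis='sic')",   lambda: sum(range(10**5))),
}
best = min(candidates, key=lambda name: cost(*candidates[name]))
print(best)
```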
In many applications of classifier learning, training data suffers from label noise. Deep networks are learned using huge training data where the problem of noisy labels is particularly relevant. The current techniques proposed for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy labels. An alternate approach would be to look for loss functions that are inherently noise-tolerant. For binary classification there exist theoretical results on loss functions that are robust to label noise. In this paper, we provide some sufficient conditions on a loss function so that risk minimization under that loss function would be inherently tolerant to label noise for multiclass classification problems. These results generalize the existing results on noise-tolerant loss functions for binary classification. We study some of the widely used loss functions in deep networks and show that the loss function based on mean absolute value of error is inherently robust to label noise. Thus standard back propagation is enough to learn the true classifier even under label noise. Through experiments, we illustrate the robustness of risk minimization with such loss functions for learning neural networks.
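The headline claim is concrete enough to state in code: train with the mean absolute error between the softmax output and the one-hot label, a symmetric loss covered by the sufficient conditions, using nothing beyond standard backpropagation. A minimal PyTorch sketch with a placeholder model and random data:

```python
import torch
import torch.nn.functional as F

def mae_loss(logits, targets, num_classes):
    """Mean absolute error between softmax probabilities and one-hot
    labels; a symmetric loss of the kind the paper shows to be
    inherently tolerant to label noise (unlike cross-entropy)."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes).float()
    return (probs - one_hot).abs().sum(dim=1).mean()

# Toy comparison on random data; model and data are placeholders.
torch.manual_seed(0)
model = torch.nn.Linear(20, 5)
x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
loss = mae_loss(model(x), y, num_classes=5)
loss.backward()   # ordinary backpropagation suffices, per the paper
```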